Spoken Dialog Challenge 2010: Comparison of Live and Control Test Results

Alan W Black1, Susanne Burger1, Alistair Conkie2, Helen Hastie3, Simon Keizer4, Oliver Lemon3, Nicolas Merigaud3, Gabriel Parent1, Gabriel Schubiner1, Blaise Thomson4, Jason D. Williams2, Kai Yu4, Steve Young4, Maxine Eskenazi1
1Carnegie Mellon University, 2AT&T Labs -- Research, 3Heriot-Watt University, 4Cambridge University


Abstract

The Spoken Dialog Challenge 2010 was an exercise to investigate how different spoken dialog systems perform on the same task. The existing Let’s Go Pittsburgh Bus Information System served as the task, and four teams provided systems that were first tested in controlled conditions with speech researchers as users. The three most stable systems were then deployed to real callers. This paper presents the results of the live tests and compares them with the control test results. Results show considerable variation both between systems and between the control and live tests. Interestingly, relatively high task completion in the control tests did not always predict relatively high task completion in the live tests. Moreover, even though the systems differed considerably in design, all showed very similar correlations between word error rate and task completion. The dialog data collected is available to the research community.